
    Combined use of cytological, visual and colposcopic examination for evaluation of unhealthy cervix and their histopathological correlation

    Background: Unhealthy cervix can be the presentation of a broad spectrum of clinical conditions with diverse pathologies, such as infective, inflammatory, reactive and neoplastic lesions. Cervical cancer, which has a multifactorial causation, is the second most common cancer in the female population. Because of its prolonged preinvasive phase, the cancer can be diagnosed at an earlier stage, and early diagnosis makes it amenable to treatment. Methods: A total of 100 women with unhealthy cervix attending the gynaecology OPD of a tertiary care teaching hospital were evaluated. Evaluation involved history taking, cytological assessment by Pap smear, visual inspection of the cervix after acetic acid application (VIA), colposcopic assessment and biopsy for histopathological evaluation. Results: Correlating all these modalities to rule out neoplastic aetiology showed a high specificity of 91.9%. The positive predictive value of the combined approach was 65%, whereas the negative predictive value approached 100%. Conclusions: A combined approach with VIA, colposcopy, Pap smear and directed biopsy provides a comprehensive evaluation of the unhealthy cervix.
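    For readers less familiar with the screening statistics quoted above, the minimal sketch below shows the standard definitions of sensitivity, specificity, PPV and NPV computed from a 2x2 confusion table against the histopathological reference; the function name and arguments are illustrative and not taken from the study, and no counts from the paper are reproduced here.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from confusion-table counts.

    tp/fp/tn/fn: counts from comparing the combined screening result
    (VIA + Pap smear + colposcopy) against histopathology (biopsy).
    Names and interface are illustrative, not from the study.
    """
    sensitivity = tp / (tp + fn)   # detected neoplastic cases among all neoplastic cases
    specificity = tn / (tn + fp)   # correctly excluded cases among all non-neoplastic cases
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv}
```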

    Learning Features by Watching Objects Move

    This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as 'pseudo ground truth' to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce. Comment: CVPR 2017
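    As a rough illustration of the pseudo-labelling idea described in the abstract, the sketch below trains a per-pixel segmentation network on single frames using motion-derived masks as noisy targets; the toy model, loss choice and loader interface are assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

def train_on_pseudo_labels(pseudo_label_loader, epochs=1):
    """Learn to segment objects from single frames using motion-based
    masks as noisy 'pseudo ground truth'. Model and loader are assumed."""
    # Toy fully convolutional net predicting one foreground logit per pixel.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    criterion = nn.BCEWithLogitsLoss()   # per-pixel loss against the pseudo masks
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        # Loader yields (B,3,H,W) frames and (B,1,H,W) masks in [0,1]
        # produced by unsupervised motion segmentation of the source videos.
        for frame, pseudo_mask in pseudo_label_loader:
            loss = criterion(model(frame), pseudo_mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```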

    Human-to-Robot Imitation in the Wild

    We approach the problem of learning by watching humans in the wild. While traditional approaches in Imitation and Reinforcement Learning are promising for learning in the real world, they are either sample inefficient or constrained to lab settings. Meanwhile, there has been a lot of success in processing passive, unstructured human data. We propose tackling this problem via an efficient one-shot robot learning algorithm, centered around learning from a third-person perspective. We call our method WHIRL: In-the-Wild Human Imitating Robot Learning. WHIRL extracts a prior over the intent of the human demonstrator, using it to initialize our agent's policy. We introduce an efficient real-world policy learning scheme that improves using interactions. Our key contributions are a simple sampling-based policy optimization approach, a novel objective function for aligning human and robot videos, and an exploration method to boost sample efficiency. We show one-shot generalization and success in real-world settings, including 20 different manipulation tasks in the wild. Videos and talk at https://human2robot.github.io Comment: Published at RSS 2022. Demos at https://human2robot.github.io
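    The abstract only names the ingredients (an intent prior, a sampling-based policy optimizer and a human-robot video alignment objective), so the sketch below is one plausible reading of such a loop as a generic cross-entropy-style sampler; every function name and parameter here is an assumption for illustration, not WHIRL's actual algorithm.

```python
import numpy as np

def improve_policy(prior_mean, prior_std, rollout, human_video_features,
                   alignment_cost, iterations=10, samples=32, elite_frac=0.25):
    """Generic sampling-based policy improvement (an assumed CEM-style loop).

    prior_mean/prior_std: policy parameters initialised from the human-intent prior.
    rollout(params): executes parameters on the robot, returns robot-video features.
    alignment_cost(robot_feats, human_feats): lower means better human-robot alignment.
    """
    mean = np.asarray(prior_mean, dtype=float)
    std = np.asarray(prior_std, dtype=float)
    n_elite = max(1, int(samples * elite_frac))
    for _ in range(iterations):
        candidates = np.random.normal(mean, std, size=(samples, mean.size))
        costs = [alignment_cost(rollout(c), human_video_features) for c in candidates]
        elites = candidates[np.argsort(costs)[:n_elite]]  # keep best-aligned rollouts
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean
```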